As a contribution to quantitative set-theoretic reasoning, a translation is proposed of conjunctions of literals of the forms $x = y \setminus z$, $x \neq y \setminus z$, and $z = \{x\}$, where $x, y, z$ stand for variables ranging over the von Neumann universe of sets, into unquantified Boolean formulae of a rather simple conjunctive normal form. The formulae in the target language involve variables ranging over the Boolean ring of sets, along with a difference operator and relators designating equality, non-disjointness, and inclusion. Moreover, the result of each translation is a conjunction of literals of the forms $x = y \setminus z$, $x \neq y \setminus z$ and of implications whose antecedents are isolated literals and whose consequents are either inclusions (strict or non-strict) between variables or equalities between variables. Besides reflecting a simple and natural semantics, which ensures satisfiability preservation, the proposed translation has quadratic algorithmic time complexity, and it bridges two languages both of which are known to have NP-complete satisfiability problems.
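For illustration only (the literal forms are those named in the abstract, while the variable names are arbitrary), a source-language conjunction handled by such a translation has the shape
\[
x = y \setminus z \;\wedge\; u \neq v \setminus w \;\wedge\; w = \{x\},
\]
which the procedure maps to an unquantified CNF formula over Boolean-ring variables built from the difference operator and the equality, non-disjointness, and inclusion relators.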
The fifth generation of the Radio Access Network (RAN) has brought new services, technologies, and paradigms, with the corresponding societal benefits. However, the energy consumption of 5G networks is today a concern. In recent years, the design of new methods for decreasing the RAN power consumption has attracted interest from both the research community and standardization bodies, and many energy-saving solutions have been proposed. However, there is still a need to understand the power consumption behavior of state-of-the-art base station architectures, such as multi-carrier active antenna units (AAUs), as well as the impact of different network parameters. In this paper, we present a power consumption model for 5G AAUs based on artificial neural networks. We demonstrate that this model achieves good estimation performance and is able to capture the benefits of energy saving when dealing with the complexity of multi-carrier base station architectures. Importantly, multiple experiments are carried out to show the advantage of designing a general model able to capture the power consumption behavior of different types of AAUs. Finally, we provide an analysis of the model scalability and the training data requirements.
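A minimal sketch of the kind of ANN-based power model described above, assuming hypothetical configuration features (number of carriers, bandwidth, transmit power, traffic load, active antenna elements) and made-up measurements; it is not the architecture or dataset used in the paper.

```python
# Illustrative only: a small feed-forward network mapping AAU/carrier
# configuration features to estimated power consumption (watts).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical features per sample:
# [num_carriers, bandwidth_MHz, tx_power_dBm, traffic_load, active_antenna_elements]
X = np.array([
    [1, 20, 40, 0.10, 32],
    [2, 40, 43, 0.35, 64],
    [3, 60, 46, 0.60, 64],
    [4, 80, 46, 0.85, 64],
])
y = np.array([310.0, 520.0, 740.0, 930.0])  # made-up measured AAU power in watts

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict([[2, 40, 43, 0.50, 64]]))  # estimated power for a new configuration
```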
We study a natural extension of classical empirical risk minimization, where the hypothesis space is a random subspace of a given space. In particular, we consider possibly data-dependent subspaces spanned by a random subset of the data, recovering as a special case Nystrom approaches for kernel methods. Considering random subspaces naturally leads to computational savings, but the question is whether the corresponding learning accuracy is degraded. These statistical-computational tradeoffs have recently been explored for the least squares loss and self-concordant loss functions, such as the logistic loss. Here, we work to extend these results to convex Lipschitz loss functions that might not be smooth, such as the hinge loss used in support vector machines. This unified analysis requires developing new proofs that use different technical tools, such as sub-Gaussian inputs, to achieve fast rates. Our main results show the existence of different settings, depending on how hard the learning problem is, for which computational efficiency can be improved with no loss in performance.
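A minimal sketch of ERM over a random, data-dependent subspace with the non-smooth hinge loss, assuming an RBF kernel and uniform Nystrom sampling of landmarks; the data, kernel, and regularization below are placeholders, not the paper's setting.

```python
# Sketch: hinge-loss ERM restricted to the subspace spanned by m random data points.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
n, d, m = 500, 10, 50                                  # n samples, d features, m << n landmarks
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=n))        # synthetic binary labels

landmarks = X[rng.choice(n, size=m, replace=False)]    # random subset spanning the subspace
K_nm = rbf_kernel(X, landmarks, gamma=0.5)
K_mm = rbf_kernel(landmarks, landmarks, gamma=0.5)

# Nystrom feature map: Phi ~ K_nm @ K_mm^{-1/2} (up to an orthogonal rotation).
U, s, _ = np.linalg.svd(K_mm)
Phi = (K_nm @ U) / np.sqrt(np.maximum(s, 1e-12))

# Hinge-loss ERM (linear SVM) over the m-dimensional random-subspace features.
clf = LinearSVC(C=1.0, max_iter=10000).fit(Phi, y)
print("training accuracy:", clf.score(Phi, y))
```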
The 1$^{\text{st}}$ Workshop on Maritime Computer Vision (MaCVi) 2023 focused on maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicles (USV), and organized several subchallenges in this domain: (i) UAV-based Maritime Object Detection, (ii) UAV-based Maritime Object Tracking, (iii) USV-based Maritime Obstacle Segmentation and (iv) USV-based Maritime Obstacle Detection. The subchallenges were based on the SeaDronesSee and MODS benchmarks. This report summarizes the main findings of the individual subchallenges and introduces a new benchmark, called SeaDronesSee Object Detection v2, which extends the previous benchmark by including more classes and footage. We provide statistical and qualitative analyses, and assess trends in the best-performing methodologies of over 130 submissions. The methods are summarized in the appendix. The datasets, evaluation code and the leaderboard are publicly available at https://seadronessee.cs.uni-tuebingen.de/macvi.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
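A hedged usage sketch of the openly released checkpoints via the Hugging Face transformers library; the model id "bigscience/bloom-560m" is an assumption (a small public BLOOM variant) chosen so the example can run on a single machine rather than requiring the full 176B-parameter model.

```python
# Sketch: load a public BLOOM checkpoint and generate a short continuation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"   # assumption: a small openly released BLOOM variant
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("The BLOOM language model was trained on", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```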
The energy consumption of the fifth generation (5G) of mobile networks is one of the main concerns of the telecom industry. However, there is currently no accurate and tractable approach for evaluating the power consumption of 5G base stations (BSs). In this paper, we propose a novel model for a realistic characterization of the power consumption of 5G multi-carrier BSs, built upon a large data-collection campaign. First, we define a machine learning architecture that allows modelling multiple 5G BS products. Then, we exploit the knowledge gathered by this framework to derive a realistic and analytically tractable power consumption model, which can help drive theoretical analyses as well as feature standardization, development, and optimization frameworks. Notably, we demonstrate that this model achieves high accuracy and is able to capture the benefits of energy-saving mechanisms. We believe this analytical model is a fundamental tool for understanding 5G BS power consumption and for accurately optimizing network energy efficiency.
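As a purely illustrative sketch (not the analytical model derived in the article), an analytically tractable multi-carrier BS power model of the kind described above might combine a static term with a per-carrier, load-proportional term; all parameter values below are assumptions.

```python
# Illustrative parametric form only: static power plus per-carrier idle and
# traffic-dependent contributions.
def bs_power_watts(n_carriers: int, load: float, p_static: float = 250.0,
                   p_carrier_idle: float = 60.0, p_carrier_full: float = 180.0) -> float:
    """Estimate multi-carrier BS power from the number of carriers and traffic load."""
    assert 0.0 <= load <= 1.0
    per_carrier = p_carrier_idle + load * (p_carrier_full - p_carrier_idle)
    return p_static + n_carriers * per_carrier

print(bs_power_watts(n_carriers=4, load=0.3))  # 250 + 4 * (60 + 0.3 * 120) = 634.0 W
```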
The Game Theory and Multi-Agent team at DeepMind studies several aspects of multi-agent learning, ranging from computing approximations to fundamental concepts in game theory, to simulating social dilemmas in rich spatial environments, and training 3-D humanoids in difficult team-coordination tasks. A signature aim of our group is to use the resources and expertise made available to us at DeepMind in deep reinforcement learning to explore multi-agent systems in complex environments, and to use these benchmarks to advance our understanding. Here, we summarize our team's recent work and present a taxonomy that we believe highlights many important open challenges in multi-agent research.
This technical report describes the modular and scalable architecture of MARIO, a software application for computing vision statistics in the RoboCup SPL, presented during the SPL Open Research Challenge at RoboCup 2022, held in Bangkok (Thailand). MARIO is an open-source, ready-to-use software application whose ultimate goal is to contribute to the development of the RoboCup SPL community. MARIO comes with a GUI that integrates multiple machine learning and computer vision based features, including automatic camera calibration, background subtraction, homography computation, player + ball tracking and localization, NAO robot pose estimation, and fall detection. MARIO ranked no. 1 in the Open Research Challenge.
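A minimal sketch of two of the listed building blocks, background subtraction and homography-based localization, using OpenCV; the video path, reference points, and field dimensions are placeholders rather than MARIO's actual implementation.

```python
import cv2
import numpy as np

# Background subtraction: isolate moving players/ball from the static field.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
cap = cv2.VideoCapture("match.mp4")          # placeholder video path
ok, frame = cap.read()
if ok:
    foreground_mask = subtractor.apply(frame)

# Homography: map image (pixel) coordinates to field (metric) coordinates from
# four known reference points, e.g. the corners of the playing field.
img_pts = np.float32([[120, 80], [1180, 90], [1200, 640], [100, 630]])
field_pts = np.float32([[0, 0], [9.0, 0], [9.0, 6.0], [0, 6.0]])  # assumed field size in metres
H, _ = cv2.findHomography(img_pts, field_pts)

ball_img = np.float32([[640, 360]]).reshape(-1, 1, 2)
ball_field = cv2.perspectiveTransform(ball_img, H)   # localized position on the field
print(ball_field)
```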
The problem of Raman amplifier optimization is studied. A differentiable interpolation function for the Raman gain coefficient is obtained using machine learning (ML), which allows for gradient-descent optimization of forward-propagating Raman pumps. The frequencies and powers of an arbitrary number of pumps in a forward-pumping configuration are then optimized for arbitrary data channel loads and span lengths. An experimentally trained ML model of the forward-pumped Raman amplifier is combined with the forward-propagation model to jointly optimize the frequencies and powers of the forward amplifier's pumps and the powers of the backward amplifier's pumps. The joint forward and backward amplifier optimization is demonstrated for an unrepeatered span of 250 km. A gain flatness of $<1$ dB over 4 THz is achieved. The optimized amplifiers are validated using a numerical simulator.
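A hedged sketch of the core idea, gradient-descent optimization of pump frequencies and powers through a differentiable gain model; the toy Gaussian gain surrogate below stands in for the learned Raman gain interpolation and propagation model used in the paper, and all numerical values are assumptions.

```python
import torch

channel_freqs = torch.linspace(186.0, 190.0, 40)        # THz, assumed WDM channel grid
target_gain_db = torch.full_like(channel_freqs, 10.0)   # assumed flat 10 dB gain target

pump_freqs = torch.tensor([200.0, 202.0, 204.0], requires_grad=True)   # THz
pump_powers = torch.tensor([0.2, 0.2, 0.2], requires_grad=True)        # W

def gain_db(freqs, p_freqs, p_powers):
    # Toy differentiable surrogate: each pump contributes a Gaussian gain bump
    # centred roughly 13 THz below its own frequency (approximate Raman shift).
    shift, width = 13.0, 3.0
    contrib = p_powers[None, :] * torch.exp(
        -((freqs[:, None] - (p_freqs[None, :] - shift)) ** 2) / (2 * width ** 2))
    return 40.0 * contrib.sum(dim=1)

opt = torch.optim.Adam([pump_freqs, pump_powers], lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = torch.mean((gain_db(channel_freqs, pump_freqs, pump_powers) - target_gain_db) ** 2)
    loss.backward()
    opt.step()
    pump_powers.data.clamp_(min=0.0)   # keep pump powers physical in this toy example

flatness = (gain_db(channel_freqs, pump_freqs, pump_powers) - target_gain_db).abs().max()
print(f"max deviation from target gain: {flatness.item():.2f} dB")
```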
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.